Regularized Computation of Approximate Pseudoinverse of Large Matrices Using Low-Rank Tensor Train Decompositions
Abstract
We propose a new method for low-rank approximation of Moore-Penrose pseudoinverses (MPPs) of large-scale matrices using tensor networks. The computed pseudoinverses can be used to solve, or to precondition, large-scale overdetermined or underdetermined systems of linear equations. The computation is performed efficiently and stably by a modified alternating least squares (MALS) scheme based on low-rank tensor train (TT) decomposition, which reduces the large-scale optimization problem to a sequence of smaller-scale problems to which any standard, stable algorithm can be applied. A regularization technique is further introduced in order to alleviate ill-posedness and obtain low-rank solutions. Numerical simulation results illustrate that the pseudoinverses of a wide class of nonsquare or nonsymmetric matrices admit accurate low-rank TT approximations. It is demonstrated that the computational cost of the proposed method grows only logarithmically with the matrix size, provided that the TT-ranks of the data matrix and of its approximate pseudoinverse are bounded. It is also illustrated that a strongly nonsymmetric convection-diffusion problem can be solved efficiently using preconditioners computed by the proposed method.
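The TT-based algorithm itself is beyond a short snippet, but the kind of regularized subproblem at the heart of each MALS step can be sketched in the dense, small-scale setting. The following minimal Python example (numpy only; the function name `regularized_pinv` and the value of the regularization parameter `lam` are illustrative assumptions, not from the paper) computes a Tikhonov-regularized pseudoinverse, i.e. the minimizer of ||AX - I||_F^2 + lam ||X||_F^2:

```python
import numpy as np

# A minimal dense sketch (NOT the paper's TT algorithm): the MALS sweep
# reduces the large problem to small regularized least-squares subproblems
# of roughly this form. `lam` is an illustrative regularization parameter.
def regularized_pinv(A, lam=1e-8):
    n = A.shape[1]
    # Normal-equations form: (A^T A + lam I) X = A^T
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))               # small overdetermined test matrix
X = regularized_pinv(A)
print(np.allclose(X, np.linalg.pinv(A), atol=1e-5))  # close to the exact MPP
```

For tiny lam the result approaches the exact Moore-Penrose pseudoinverse; larger lam trades accuracy for stability on ill-posed problems, which mirrors the role of regularization described in the abstract.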
Similar resources
Estimating a Few Extreme Singular Values and Vectors for Large-Scale Matrices in Tensor Train Format
We propose new algorithms for singular value decomposition (SVD) of very large-scale matrices based on a low-rank tensor approximation technique called the tensor train (TT) format. The proposed algorithms can compute several dominant singular values and corresponding singular vectors for large-scale structured matrices given in a TT format. The computational complexity of the proposed methods ...
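The TT-format algorithms themselves are not reproduced here; as a point of reference for the same task on an explicitly stored matrix (a sketch assuming SciPy is available, with arbitrary problem sizes), a few dominant singular triplets can be computed with a standard iterative solver:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A minimal non-TT sketch of the same task: a few dominant singular
# triplets of a large sparse matrix via an iterative solver. The TT-based
# algorithms in the cited paper instead work on matrices given in TT format.
A = sparse_random(10_000, 5_000, density=1e-3, random_state=0)
u, s, vt = svds(A, k=3)        # three largest singular values and vectors
print(np.sort(s)[::-1])        # singular values in descending order
```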
Tensor Completion by Alternating Minimization under the Tensor Train (TT) Model
Using the matrix product state (MPS) representation of tensor train decompositions, in this paper we propose a tensor completion algorithm which alternates over the matrices (tensors) in the MPS representation. This development is motivated in part by the success of matrix completion algorithms which alternate over the (low-rank) factors. We comment on the computational complexity of the propos...
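As background for the alternating scheme, here is a minimal sketch of the matrix-completion analogue the authors cite as motivation: alternating least squares over the two low-rank factors (the TT algorithm alternates over MPS cores instead). The function name `als_complete` and all parameter values are illustrative assumptions, not from the paper:

```python
import numpy as np

def als_complete(M, mask, rank=2, n_iters=30, lam=1e-3):
    """Alternating least squares for matrix completion: M ~ U @ V.T on mask."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(n_iters):
        for i in range(m):                      # fix V, update each row of U
            idx = mask[i]
            Vi = V[idx]
            U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(rank), Vi.T @ M[i, idx])
        for j in range(n):                      # fix U, update each row of V
            idx = mask[:, j]
            Uj = U[idx]
            V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(rank), Uj.T @ M[idx, j])
    return U, V

# Recover a rank-2 matrix from roughly half of its entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.5
U, V = als_complete(M, mask, rank=2)
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))  # small relative error
```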
Fundamental Tensor Operations for Large-Scale Data Analysis in Tensor Train Formats
We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques including Candecomp/Parafac (CP), Tucker, and tensor train (TT) decompositions with a number of mathematical and graphical representations. We also pro...
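As a small concrete illustration on plain numpy arrays (the cited paper defines these operations for tensors in TT format, which this sketch does not use), the Kronecker, Hadamard, and contracted products look as follows:

```python
import numpy as np

# Dense analogues of the products discussed above.
A = np.arange(4).reshape(2, 2)
B = np.ones((2, 2))

K = np.kron(A, B)                     # Kronecker product: 4 x 4
H = A * B                             # Hadamard (elementwise) product: 2 x 2
T = np.random.default_rng(0).standard_normal((3, 4, 5))
S = np.random.default_rng(1).standard_normal((5, 4, 6))
C = np.tensordot(T, S, axes=([1, 2], [1, 0]))  # contraction over two modes
print(K.shape, H.shape, C.shape)      # (4, 4) (2, 2) (3, 6)
```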
Algorithm xxx: Reliable Calculation of Numerical Rank, Null Space Bases, Pseudoinverse Solutions, and Basic Solutions using SuiteSparseQR
The SPQR_RANK package contains routines that calculate the numerical rank of large, sparse, numerically rank-deficient matrices. The routines can also calculate orthonormal bases for numerical null spaces, approximate pseudoinverse solutions to least squares problems involving rank-deficient matrices, and basic solutions to these problems. The algorithms are based on SPQR from SuiteSparseQR (ACM...
Multi-Level Cluster Indicator Decompositions of Matrices and Tensors
A central challenge for many machine learning and data mining applications is that the amounts of data and features are very large, so that low-rank approximations of the original data are often required for efficient computation. We propose new multi-level clustering based low-rank matrix approximations which are comparable to, and even more compact than, the singular value decomposition (SVD). We ut...
Journal: SIAM J. Matrix Analysis Applications
Volume: 37, Issue: -
Pages: -
Published: 2016